
    SISO Space Reference FOM 101

    This tutorial covers the basic concepts associated with the SISO Space Reference Federation Object Model. It will be given at the SISO 2020 SIW in Orlando, Florida.

    SISO Space Reference FOM - Tools and Testing

    The Simulation Interoperability Standards Organization (SISO) Space Reference Federation Object Model (SpaceFOM) version 1.0 is nearing completion. Earlier papers have described the use of the High Level Architecture (HLA) in space simulation as well as technical aspects of the SpaceFOM. This paper takes a look at different SpaceFOM tools and how they were used during the development and testing of the standard.

    The first organizations to develop SpaceFOM-compliant federates for SpaceFOM development and testing were NASA's Johnson Space Center (JSC), the University of Calabria (UNICAL), and Pitch Technologies.

    JSC is one of NASA's lead centers for human space flight. Much of the core distributed simulation technology development, specifically associated with the SpaceFOM, is done by the NASA Exploration Systems Simulations (NExSyS) team. One of NASA's principal simulation development tools is the Trick Simulation Environment. NASA's NExSyS team has been modifying and using Trick and TrickHLA to help develop and test the SpaceFOM.

    The System Modeling And Simulation Hub Laboratory (SMASH-Lab) at UNICAL has developed the Simulation Exploration Experience (SEE) HLA Starter Kit, which has been used by most SEE teams involved in the distributed simulation of a Moon base. It is particularly useful for the development of federates that are compatible with the SpaceFOM. The HLA Starter Kit is a Java-based tool that provides a well-structured framework to simplify the formulation, generation, and execution of SpaceFOM-compliant federates.

    Pitch Technologies, a company specializing in distributed simulation, is utilizing a number of its existing HLA tools to support development and testing of the SpaceFOM. In addition to the existing tools, Pitch has developed a few SpaceFOM-specific federates: Space Master, for managing the initialization, execution, and pacing of any SpaceFOM federation; EarthEnvironment, a simple Root Reference Publisher; and Space Monitor, a graphical tool for monitoring reference frames and physical entities.

    Early testing of the SpaceFOM was carried out in the SEE university outreach program, initiated within SISO. Students were given a subset of the FOM, which was later extended. Sample federates were developed, and frameworks were developed or adapted to the early FOM versions.

    As drafts of the standard matured, testing was performed using federates from government, industry, and academia. By mixing federates developed by different teams, the standard could be tested with respect to functional correctness, robustness, and clarity.

    These frameworks and federates have been useful when testing and verifying the design of the standard. In addition, they have since formed a starting point for developing SpaceFOM-compliant federations in several projects, for example for NASA and ESA as well as for SEE.
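
    As a rough illustration of the data the SpaceFOM is organized around (a tree of reference frames plus physical entities expressed relative to a parent frame), here is a purely conceptual sketch. The class and attribute layout is a simplified assumption for illustration only; it is not the actual SpaceFOM object model, nor the Trick/TrickHLA, HLA Starter Kit, or Pitch APIs.

        # Purely conceptual sketch: simplified, hypothetical classes, not the actual
        # SpaceFOM object model or any of the frameworks named above.
        from dataclasses import dataclass

        @dataclass
        class ReferenceFrame:
            name: str
            parent: "ReferenceFrame | None" = None
            position: tuple = (0.0, 0.0, 0.0)      # offset from the parent frame, metres

        @dataclass
        class PhysicalEntity:
            name: str
            parent_frame: ReferenceFrame
            position: tuple = (0.0, 0.0, 0.0)      # expressed in the parent frame

        # A root reference frame publisher (e.g. an Earth environment federate) owns the
        # root of the frame tree; other federates attach frames and entities below it.
        root = ReferenceFrame("EarthCentredInertial")
        moon = ReferenceFrame("MoonCentredFixed", parent=root, position=(3.84e8, 0.0, 0.0))
        lander = PhysicalEntity("Lander", parent_frame=moon, position=(1.0e3, 0.0, 1.737e6))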

    Relaxed Attention for Transformer Models

    The powerful modeling capabilities of all-attention-based transformer architectures often cause overfitting and, for natural language processing tasks, lead to an implicitly learned internal language model in the autoregressive transformer decoder that complicates the integration of external language models. In this paper, we explore relaxed attention, a simple and easy-to-implement smoothing of the attention weights that yields a two-fold improvement to the general transformer architecture: first, relaxed attention provides regularization when applied to the self-attention layers in the encoder; second, we show that it naturally supports the integration of an external language model, as it suppresses the implicitly learned internal language model by relaxing the cross attention in the decoder. We demonstrate the benefit of relaxed attention across several tasks, with clear improvements in combination with recent benchmark approaches. Specifically, we exceed the former state-of-the-art performance of 26.90% word error rate on the largest public lip-reading benchmark, LRS3, with a word error rate of 26.31%, and we achieve a top-performing BLEU score of 37.67 on the IWSLT14 (DE→EN) machine translation task without external language models and with virtually no additional model parameters. Code and models will be made publicly available.
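
    The smoothing itself can be written in a few lines. The sketch below blends the softmax attention weights with a uniform distribution controlled by a relaxation coefficient gamma, which is an assumed reading of the smoothing described above; it is a minimal standalone PyTorch illustration, not the authors' released code, and the function name and default gamma are placeholders.

        # Minimal sketch of attention-weight smoothing ("relaxed attention").
        # Assumption: the smoothing blends the softmax output with a uniform
        # distribution; gamma = 0 recovers standard scaled dot-product attention.
        import torch
        import torch.nn.functional as F

        def relaxed_attention(query, key, value, gamma=0.1):
            d_k = query.size(-1)
            scores = torch.matmul(query, key.transpose(-2, -1)) / d_k ** 0.5
            weights = F.softmax(scores, dim=-1)                  # standard attention weights
            uniform = torch.full_like(weights, 1.0 / weights.size(-1))
            relaxed = (1.0 - gamma) * weights + gamma * uniform  # smoothed weights
            return torch.matmul(relaxed, value)

        # Example: batch of 2 sequences of length 5 with model dimension 8.
        q, k, v = (torch.randn(2, 5, 8) for _ in range(3))
        out = relaxed_attention(q, k, v, gamma=0.1)              # shape (2, 5, 8)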

    Is Bigger Always Better? Lessons Learnt from the Evolution of Deep Learning Architectures for Image Classification

    There exist numerous scientific contributions to the design of deep learning networks. However, choosing the right architecture for a given business problem, with all its constraints such as memory and inference time requirements, can be cumbersome. We reflect on the evolution of the state-of-the-art architectures for convolutional neural networks (CNNs) for the case of image classification. We compare architectures regarding classification results, model size, and inference time to discuss the design choices for CNN architectures. To maintain scientific comprehensibility, the established ILSVRC benchmark is used as a basis for model selection and benchmark data. The quantitative comparison shows that while model size and required inference time correlate with result accuracy across all architectures, there are major trade-offs between these factors. The qualitative analysis further shows that published models always build on previous research and adopt improved components in either evolutionary or revolutionary ways. Finally, we discuss design and result improvements during the evolution of CNN architectures. Further, we derive practical implications for designing deep learning networks.
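
    For readers who want to reproduce this kind of size/latency comparison themselves, a minimal sketch follows, using torchvision reference implementations as stand-ins for ILSVRC-era models. The chosen models, the CPU wall-clock timing, and the single-image batch are illustrative assumptions, not the paper's benchmarking setup.

        # Sketch: compare parameter count and single-image inference time of a few CNNs.
        # Model choices and the simple wall-clock timing are illustrative assumptions.
        import time
        import torch
        from torchvision import models

        candidates = {
            "alexnet": models.alexnet(),
            "vgg16": models.vgg16(),
            "resnet50": models.resnet50(),
        }

        image = torch.randn(1, 3, 224, 224)          # one ImageNet-sized input
        for name, model in candidates.items():
            model.eval()
            n_params = sum(p.numel() for p in model.parameters())
            with torch.no_grad():
                start = time.perf_counter()
                model(image)
                elapsed = time.perf_counter() - start
            print(f"{name}: {n_params / 1e6:.1f}M parameters, {elapsed * 1e3:.1f} ms per image")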

    Yield Prognosis for the Agrarian Management of Vineyards using Deep Learning for Object Counting

    In various applications, the counting of objects based on image data plays a pivotal role. In this paper, we first conducted a literature review to survey the state of the art in counting objects and summarized the results by extracting several important concepts that describe the counting problem as well as the solution. In a second step, we applied this knowledge to yield prognosis in vineyards, where we used Deep Learning models to detect the objects. While the detection methods are state of the art and perform very well, the counting step introduces the additional constraint that each object must be counted only once. We address this common problem by identifying unique objects and tracking them throughout a sequence of images to avoid counting objects more than once, resulting in an automated yield prognosis model for vineyards.
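
    The crux of the counting step is associating detections across consecutive images so that each object keeps a single identity. The sketch below uses greedy intersection-over-union matching between frames as an assumed association criterion; the detector output format and the matching rule are simplifications, not the paper's actual tracking method.

        # Sketch: count unique objects across a sequence of frames by greedy IoU matching.
        # IoU-based association is an assumed, simplified stand-in for the tracking step.
        def iou(a, b):
            ax1, ay1, ax2, ay2 = a
            bx1, by1, bx2, by2 = b
            ix1, iy1 = max(ax1, bx1), max(ay1, by1)
            ix2, iy2 = min(ax2, bx2), min(ay2, by2)
            inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
            union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
            return inter / union if union > 0 else 0.0

        def count_unique(frames, iou_threshold=0.5):
            """frames: list of lists of (x1, y1, x2, y2) detection boxes, one list per image."""
            next_id, tracks = 0, {}              # track id -> last seen box
            for boxes in frames:
                matched = {}
                for box in boxes:
                    best_id, best_iou = None, iou_threshold
                    for tid, prev in tracks.items():
                        score = iou(box, prev)
                        if score > best_iou and tid not in matched:
                            best_id, best_iou = tid, score
                    if best_id is None:          # unseen object: assign a new identity
                        best_id = next_id
                        next_id += 1
                    matched[best_id] = box
                tracks = matched                  # carry forward only tracks seen this frame
            return next_id                        # number of distinct objects observed

        # Example: the same two grapes seen in two overlapping images are counted once each.
        frame1 = [(10, 10, 50, 50), (60, 10, 100, 50)]
        frame2 = [(12, 11, 52, 51), (61, 12, 101, 52)]
        print(count_unique([frame1, frame2]))     # 2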

    Power Efficiency for Software Algorithms running on Graphics Processors

    Power efficiency has become the most important consideration for many modern computing devices. In this paper, we examine the power efficiency of a range of graphics algorithms on different GPUs. To measure power consumption, we have built a power measuring device that samples currents at a high frequency. We compare the power efficiency of different graphics algorithms by measuring the power and performance of three different primary rendering algorithms and three different shadow algorithms. We measure these algorithms' power signatures on a mobile phone, on an integrated CPU and graphics processor, and on high-end discrete GPUs, and then compare power efficiency across both algorithms and GPUs. Our results show that power efficiency is not always proportional to rendering performance and that, for some algorithms, power efficiency varies across different platforms. We also show that for some algorithms, energy efficiency is similar on all platforms.
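
    Turning sampled currents into an energy-per-frame figure is a simple numerical integration. The sketch below assumes a fixed sample rate and a constant supply voltage and divides the total energy by the number of frames rendered; the actual measurement device and post-processing used in the paper may differ.

        # Sketch: turn high-frequency current samples into energy per rendered frame.
        # A fixed sample rate and a constant supply voltage are simplifying assumptions.
        def energy_per_frame(current_samples_a, sample_rate_hz, supply_voltage_v, frames_rendered):
            dt = 1.0 / sample_rate_hz
            # E = sum over samples of V * I * dt  (joules)
            total_energy_j = sum(supply_voltage_v * i * dt for i in current_samples_a)
            return total_energy_j / frames_rendered

        # Example: 10 kHz sampling over 2 s of rendering at 60 fps, drawing ~1.5 A at 12 V.
        samples = [1.5] * 20_000
        print(energy_per_frame(samples, 10_000, 12.0, 120))   # ~0.3 J per frame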

    Raytracing in the compensation of the peripheral optics of the eye

    Background: Many people with a visual impairment have only peripheral vision. However, there is limited knowledge of the peripheral optics of the eye, and only some measurements are available in this field.
    Methods: We simulated the paths of peripheral rays through the eye by means of raytracing. Five programs were compared. The OSLO raytracing software proved to be the best one under these circumstances, and we also found it very well suited to our purpose. Remaining uncertainties are entirely due to a lack of input data about the peripheral part of the optical system of the eye. We designed compensatory optics on the basis of the test results.
    Results: Lenses have been manufactured in accordance with the calculations made by the program for angles of incidence of 20, 40, and 60 degrees. The lenses are high-compensation astigmatic lenses. The results of perimeter examinations of changes in peripheral vision using attachment optics were inconclusive, while tests of the lenses as attachments in front of a fundus camera produced successful preliminary results.
    Conclusion: The next step is to test peripheral vision compensatory optics in traffic situations (driving simulator). At the same time, attempts are being made to find methods and instruments for measuring the peripheral optics of the eye.
    Keywords: astigmatism, central scotoma, raytracing, macular degeneration, peripheral vision
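
    As a small illustration of the per-surface computation such a raytracer performs, the sketch below applies the vector form of Snell's law at a single refracting surface. This is a generic textbook formula with an assumed corneal refractive index of about 1.376; it is not the OSLO implementation or the eye model used in the study.

        # Toy refraction step (vector form of Snell's law), illustrating the kind of
        # per-surface computation a lens-design raytracer performs at each interface.
        import math

        def refract(d, n, n1, n2):
            """d: unit incident direction, n: unit surface normal facing the incident side."""
            eta = n1 / n2
            cos_i = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
            sin2_t = eta * eta * (1.0 - cos_i * cos_i)
            if sin2_t > 1.0:
                return None                       # total internal reflection
            cos_t = math.sqrt(1.0 - sin2_t)
            k = eta * cos_i - cos_t
            return tuple(eta * d[i] + k * n[i] for i in range(3))

        # Example: a ray hitting a flat air-to-cornea interface (n ~ 1.376) at 40 degrees.
        theta = math.radians(40.0)
        incident = (math.sin(theta), 0.0, -math.cos(theta))
        normal = (0.0, 0.0, 1.0)                  # points back toward the incoming ray
        print(refract(incident, normal, 1.0, 1.376))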

    Efficient multi-view ray tracing using edge detection and shader reuse

    Stereoscopic rendering and 3D stereo displays are quickly becoming mainstream. The natural extension is autostereoscopic multi-view displays, which, by the use of parallax barriers or lenticular lenses, can accommodate many simultaneous viewers without the need for active or passive glasses. As these displays, for the foreseeable future, will support only a rather limited number of views, there is a need for high-quality interperspective antialiasing. We present a specialized algorithm for efficient multi-view image generation from a camera line using ray tracing, which builds on previous methods for multi-dimensional adaptive sampling and reconstruction of light fields. We introduce multi-view silhouette edges to detect sharp geometrical discontinuities in the radiance function. These are used to significantly improve the quality of the reconstruction. In addition, we exploit shader coherence by computing analytical visibility between shading points and the camera line, and by sharing shading computations over the camera line.
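
    The shading-reuse idea can be illustrated very simply: a view-independent shading term is computed once per shading point and then reused for every camera position along the camera line from which the point is visible. The sketch below is a conceptual toy with an assumed Lambertian shading model and a placeholder visibility test; it is not the paper's algorithm for analytical visibility or silhouette-edge detection.

        # Conceptual sketch of shading reuse over a camera line: a view-independent
        # (diffuse) result is computed once per shading point and shared across views.
        # The visibility test and shading model here are illustrative assumptions.
        def camera_positions(left, right, num_views):
            """Evenly spaced camera positions along the camera line from left to right."""
            return [tuple(l + (r - l) * i / (num_views - 1) for l, r in zip(left, right))
                    for i in range(num_views)]

        def diffuse_shade(point, normal, light):
            # Lambertian term; independent of the camera, so it can be shared across views.
            to_light = tuple(l - p for l, p in zip(light, point))
            length = sum(c * c for c in to_light) ** 0.5
            return max(0.0, sum(n * c / length for n, c in zip(normal, to_light)))

        def shade_all_views(point, normal, light, cameras, visible_from):
            shared = diffuse_shade(point, normal, light)      # computed once, reused below
            return [shared if visible_from(point, cam) else None for cam in cameras]

        # Example: 9 views along a 10 cm camera line, with a trivially permissive visibility test.
        cams = camera_positions((-0.05, 0.0, 0.0), (0.05, 0.0, 0.0), 9)
        values = shade_all_views((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), (1.0, 1.0, 0.0), cams,
                                 visible_from=lambda p, c: True)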